Enhanced Classroom Dialogue Sequences Analysis with a Hybrid AI Agent: Merging Expert Rule-Base with Large Language Models

Long, Yun, Zhang, Yu

arXiv.org Artificial Intelligence

Classroom dialogue plays a crucial role in fostering student engagement and deeper learning. However, analysing dialogue sequences has traditionally relied on either theoretical frameworks or empirical descriptions of practice, with limited integration between the two. This study addresses this gap by developing a comprehensive rule base of dialogue sequences and an Artificial Intelligence (AI) agent that combines an expert-informed rule-based system with a large language model (LLM). The agent applies expert knowledge while adapting to the complexities of natural language, enabling accurate and flexible categorisation of classroom dialogue sequences. By synthesising findings from over 30 studies, we established a systematic framework for dialogue analysis. The agent was validated against human expert coding, achieving high levels of precision and reliability. The results demonstrate that the agent provides theory-grounded and adaptive functionality, substantially enhancing the efficiency and scalability of classroom dialogue analysis and offering significant potential for improving classroom teaching practices and supporting teacher professional development.
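
As a rough illustration of the hybrid design described above, the sketch below lets deterministic expert rules fire first and falls back to an LLM only for utterances the rules do not cover. The category names, regex rules, prompt wording, and `llm` callable are hypothetical placeholders for illustration, not the paper's actual rule base or prompts.

```python
# Minimal sketch of a hybrid rule-based + LLM dialogue-sequence classifier.
# All rules and category names below are invented for illustration.
import re
from typing import Callable

# Hypothetical expert rules: regex pattern -> dialogue-sequence category.
RULES = [
    (re.compile(r"\bwhy\b|\bhow do you know\b", re.I), "elaboration_invitation"),
    (re.compile(r"\bi agree\b|\bbuilding on\b", re.I), "building_on_ideas"),
    (re.compile(r"\bis it\b.*\bor\b", re.I), "alternative_proposal"),
]

def classify_turn(utterance: str, llm: Callable[[str], str]) -> str:
    """Return a category: expert rules first, LLM fallback otherwise."""
    for pattern, category in RULES:
        if pattern.search(utterance):
            return category  # an expert rule fired: deterministic label
    # No rule matched: defer to the LLM with a constrained prompt.
    prompt = (
        "Classify this classroom utterance into one of: "
        "elaboration_invitation, building_on_ideas, alternative_proposal, other.\n"
        f"Utterance: {utterance}\nCategory:"
    )
    return llm(prompt).strip()

# Usage with a stubbed LLM (replace with a real model call):
if __name__ == "__main__":
    fake_llm = lambda prompt: "other"
    print(classify_turn("Why do you think the ice melted faster?", fake_llm))
    print(classify_turn("That's an interesting point.", fake_llm))
```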


Resolving Positional Ambiguity in Dialogues by Vision-Language Models for Robot Navigation

Chen, Kuan-Lin, Wei, Tzu-Ti, Yeh, Li-Tzu, Kao, Elaine, Tseng, Yu-Chee, Chen, Jen-Jee

arXiv.org Artificial Intelligence

We consider an autonomous navigation robot that accepts human commands in natural language to provide services in an indoor environment. These commands may include time, position, object, and action components. We observe, however, that the positional components of such commands usually refer to objects in the environment and may carry different levels of positional ambiguity. For example, the command "Go to the chair!" is ambiguous when there are multiple chairs of the same type in a room. To disambiguate such commands, we employ a large language model and a large vision-language model to conduct multiple turns of conversation with the user. We propose a two-level approach in which a vision-language model first maps the natural-language meaning to a unique object ID in images, and a second mapping then takes that object ID to a position in a 3D depth map, allowing the robot to navigate from its current position to the target. To the best of our knowledge, this is the first work linking foundation models to the positional ambiguity issue.
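
A minimal sketch of that two-level mapping, under stated assumptions, is below: level one resolves the command to a unique object ID, with a single clarification turn standing in for the paper's multi-turn LLM/VLM dialogue, and level two looks up the object's pixel centre in a depth map to produce a navigation goal. The `Detection` type, the `ask_user` callable, and the depth lookup are assumptions for illustration, not the paper's actual interfaces.

```python
# Sketch of the two-level language -> object ID -> 3D goal pipeline.
# Types and perception calls are stubbed; real systems would use a
# detector/VLM for Detection and a depth sensor for depth_map.
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple

@dataclass
class Detection:
    object_id: int                    # unique ID assigned by the detector/VLM
    label: str                        # e.g. "chair"
    bbox: Tuple[int, int, int, int]   # (x, y, w, h) in image coordinates

def resolve_object_id(command: str,
                      detections: List[Detection],
                      ask_user: Callable[[str], str]) -> Optional[int]:
    """Level 1: map the language command to a unique object ID.
    An ambiguous match triggers a clarification turn with the user."""
    candidates = [d for d in detections if d.label in command.lower()]
    if not candidates:
        return None
    if len(candidates) == 1:
        return candidates[0].object_id
    options = ", ".join(f"[{i}] {d.label} near pixel {d.bbox[:2]}"
                        for i, d in enumerate(candidates))
    idx = int(ask_user(f"Multiple matches: {options}. Which index?"))
    return candidates[idx].object_id

def object_id_to_goal(object_id: int,
                      detections: List[Detection],
                      depth_map: List[List[float]]) -> Tuple[int, int, float]:
    """Level 2: map the unique object ID to a 3D target via the depth map."""
    det = next(d for d in detections if d.object_id == object_id)
    x, y, w, h = det.bbox
    cx, cy = x + w // 2, y + h // 2   # centre pixel of the bounding box
    z = depth_map[cy][cx]             # depth (metres) at that pixel
    return (cx, cy, z)                # (pixel u, pixel v, depth) goal

# Usage with toy data: two chairs force one clarification turn.
if __name__ == "__main__":
    detections = [Detection(0, "chair", (10, 20, 30, 40)),
                  Detection(1, "chair", (200, 20, 30, 40))]
    depth = [[2.5] * 320 for _ in range(240)]  # flat 2.5 m dummy depth map
    oid = resolve_object_id("Go to the chair!", detections,
                            ask_user=lambda q: "1")
    print(object_id_to_goal(oid, detections, depth))
```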